
Author Search Result

[Author] Sun CHOI (36 hits)

Results 21-36 of 36

  • Normalizing Syntactic Structures Using Part-of-Speech Tags and Binary Rules

    Seongyong KIM  Kong-Joo LEE  Key-Sun CHOI  

     
    PAPER

      Vol:
    E86-D No:10
      Page(s):
    2049-2056

    We propose a normalization scheme for syntactic structures that uses a binary phrase structure grammar with composite labels. The normalization adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of its sub-trees, so that it represents the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labelling algorithm. Since the proposed scheme is binary and uses only part-of-speech information, it can readily be used to compare the results of different syntactic analyses independently of their syntactic descriptions, and it can be applied to other languages as well. It can also be used for syntactic analysis itself, where it outperforms the previous syntactic description on a Korean corpus. We implement a tool that transforms a syntactic description into a normalized one based on the proposed scheme; it helps construct a unified syntactic corpus and extract syntactic information from various types of syntactic corpora in a uniform way.
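
    A minimal sketch in Python of the core idea, under an assumed (hypothetical) labelling rule that is not the authors' actual algorithm: a flat constituent is binarized left to right, and each new node receives a composite label built from one part-of-speech attribute of each of its two sub-trees.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        label: str                       # POS tag for leaves, composite label otherwise
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def head_attribute(node: Node) -> str:
        # Hypothetical rule: a leaf contributes its POS tag, an inner node
        # contributes the first component of its composite label.
        return node.label.split("+")[0]

    def binarize(nodes: list) -> Node:
        # Left-branching binarization; each new node's label combines one
        # attribute from each sub-tree, so the dependency is kept in the label.
        tree = nodes[0]
        for right in nodes[1:]:
            tree = Node(f"{head_attribute(tree)}+{head_attribute(right)}", left=tree, right=right)
        return tree

    # Example: a three-word constituent tagged NNG, JKS, VV (Korean POS tags).
    print(binarize([Node("NNG"), Node("JKS"), Node("VV")]).label)    # -> NNG+VV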

  • Shape from Focus Using Multilayer Feedforward Neural Networks

    Muhammad ASIF  Tae-Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

      Vol:
    E83-D No:4
      Page(s):
    946-949

    Conventional shape from focus (SFF) methods suffer from inaccuracies because of the piecewise constant approximation of the focused image surface (FIS). We propose a more accurate SFF scheme based on representing the three-dimensional FIS in terms of neural network weights. The neural networks are trained to learn the shape of the FIS that maximizes the focus measure.
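
    A minimal sketch, assuming per-pixel depth estimates from a conventional focus measure are already available: a small feedforward network (scikit-learn's MLPRegressor here, purely as an illustrative stand-in) fits a smooth FIS z = f(x, y) to those estimates instead of keeping the piecewise constant approximation; the paper's actual training procedure may differ.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    H, W = 64, 64
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1) / float(max(H, W))

    # Placeholder for the piecewise-constant depth map obtained by taking the
    # argmax of a focus measure (e.g., modified Laplacian) over the image stack.
    initial_depth = np.random.randint(0, 30, size=H * W).astype(float)

    fis_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    fis_net.fit(coords, initial_depth)

    refined_depth = fis_net.predict(coords).reshape(H, W)   # smooth FIS estimate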

  • Call Level and Packet Level Performance Analysis of Splitted-Rating Channel Scheme in Multimedia UMTS Networks by Level Dependent QBD Process

    Bong Dae CHOI  Dong Bi ZHU  Chang Sun CHOI  

     
    PAPER-Wireless Communication Technology

      Vol:
    E85-B No:9
      Page(s):
    1685-1697

    We propose a new, efficient handoff scheme for UMTS networks, called the splitted-rating channel scheme, and analyze its call-level performance as well as the packet-level performance of downlink traffic in UMTS circuit-switched networks. To reduce the blocking probability of originating calls and the forced-termination probability of handoff calls, the splitted-rating channel scheme is applied to multimedia UMTS networks. The network supports two classes of calls: narrowband calls requiring one channel and wideband calls requiring multiple channels. The channels in service for a wideband call are split and lent to originating and handoff calls according to a threshold control policy. Assuming that arrivals of narrowband calls and of wideband calls are Poisson, we model the numbers of narrowband and wideband calls in a cell by a level-dependent quasi-birth-death (QBD) process and obtain their joint stationary distribution. For the packet-level analysis, we first describe the downlink traffic from the base station to a mobile terminal in UMTS networks and then calculate the mean packet delay of a connected wideband call using QBD analysis. Numerical examples show that the splitted-rating channel scheme reduces the blocking probability of originating calls and the forced-termination probability of handoff calls with only a slight degradation in packet delay.
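
    For reference, a generic level-dependent QBD generator of the kind used above, as a LaTeX sketch; the block sizes and entries fixed by the threshold control policy and channel counts are not reproduced here.

    Q =
    \begin{pmatrix}
      B_0       & A_0^{(0)} &           &           \\
      A_2^{(1)} & A_1^{(1)} & A_0^{(1)} &           \\
                & A_2^{(2)} & A_1^{(2)} & A_0^{(2)} \\
                &           & \ddots    & \ddots
    \end{pmatrix},
    \qquad
    \boldsymbol{\pi} Q = \mathbf{0}, \quad \boldsymbol{\pi}\mathbf{1} = 1,

    where A_0^{(k)}, A_1^{(k)} and A_2^{(k)} govern the upward, local and downward transitions at level k, and \boldsymbol{\pi} is the joint stationary distribution of the two call counts.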

  • Intelligent Extraction of a Digital Watermark from a Distorted Image

    Asifullah KHAN  Syed Fahad TAHIR  Tae-Sun CHOI  

     
    LETTER-Application Information Security

      Vol:
    E91-D No:7
      Page(s):
    2072-2075

    We present a novel approach to developing Machine Learning (ML) based decoding models for extracting a watermark in the presence of attacks. Statistical characterization of the components of various frequency bands is exploited to allow blind extraction of the watermark. Experimental results show that the proposed ML based decoding scheme can adapt to suit the watermark application by learning the alterations in the feature space incurred by the attack employed.
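
    A minimal sketch with a hypothetical feature set (not the paper's exact decoder): each block's wavelet sub-bands are characterized statistically, and a learned model decides the embedded bit from those features, so the decoder can adapt to attack-induced changes in the feature space.

    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def band_features(block: np.ndarray) -> np.ndarray:
        # Mean and standard deviation of each detail sub-band of a 2-level DWT.
        coeffs = pywt.wavedec2(block, "db2", level=2)
        feats = []
        for detail_level in coeffs[1:]:
            for band in detail_level:            # (cH, cV, cD)
                feats += [band.mean(), band.std()]
        return np.array(feats)

    # Training blocks known to carry bit 0 or bit 1 (placeholder data and labels).
    rng = np.random.default_rng(0)
    blocks = rng.normal(size=(200, 32, 32))
    bits = rng.integers(0, 2, size=200)

    X = np.stack([band_features(b) for b in blocks])
    decoder = SVC().fit(X, bits)
    print(decoder.predict(X[:5]))                # decoded bits for the first blocks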

  • Entity Summarization Based on Entity Grouping in Multilingual Projected Entity Space

    Eun-kyung KIM  Key-Sun CHOI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/06/02
      Vol:
    E100-D No:9
      Page(s):
    2138-2146

    Entity descriptions have been growing exponentially in community-generated knowledge databases such as DBpedia. However, many of those descriptions are not useful for identifying the underlying characteristics of their corresponding entities, because the descriptions include semantically redundant facts, i.e., triples that represent connections between entities without any semantic properties. Entity summarization is applied to filter out such non-informative and redundant triples and to rank the remaining informative facts within the allotted summary size. This study proposes an entity summarization approach based on pre-grouping the entities that share a set of attributes that can be used to characterize the entities we want to summarize. Entities are first grouped according to projected multilingual categories that bring the multi-angled semantics of each entity into a single entity space. Key facts about the entity are then determined through in-group-based rankings. As a result, our proposed approach produced summary information of significantly better quality (p-value = 1.52×10⁻³ and 2.01×10⁻³ for the top-10 and top-5 summaries, respectively) than the state-of-the-art method, which requires additional external resources.
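
    A minimal sketch with a hypothetical in-group scoring rule (toy data, not DBpedia): entities are pre-grouped by shared categories, and a triple is ranked by how frequent its property is within the group, so facts that characterize the group rise to the top of the summary.

    from collections import Counter

    # entity -> list of (property, value) triples
    triples = {
        "Berlin": [("country", "Germany"), ("population", "3.6M"), ("mayor", "X")],
        "Paris":  [("country", "France"), ("population", "2.1M"), ("river", "Seine")],
        "Rome":   [("country", "Italy"), ("population", "2.8M"), ("founded", "-753")],
    }
    group = ["Berlin", "Paris", "Rome"]          # e.g., entities sharing the category "capital city"

    prop_freq = Counter(p for e in group for p, _ in triples[e])

    def summarize(entity: str, k: int = 2):
        # Rank the entity's triples by how characteristic the property is in-group.
        return sorted(triples[entity], key=lambda t: prop_freq[t[0]], reverse=True)[:k]

    print(summarize("Berlin"))                   # -> country and population facts first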

  • Extracting Partial Parsing Rules from Tree-Annotated Corpus: Toward Deterministic Global Parsing

    Myung-Seok CHOI  Kong-Joo LEE  Key-Sun CHOI  Gil Chang KIM  

     
    PAPER-Natural Language Processing

      Vol:
    E88-D No:6
      Page(s):
    1248-1255

    It is not always possible to find a global parse for an input sentence, owing to problems such as errors in the sentence and incompleteness of the lexicon and grammar. Partial parsing is an alternative approach for responding to these problems. Partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties in partial parsing is how the grammar can be extracted automatically. In this paper we present a method of automatically extracting partial parsing rules from a tree-annotated corpus using the decision tree method. Our goal is deterministic global parsing using partial parsing rules, in other words, to extract partial parsing rules with higher accuracy and broader coverage. First, we define a rule template that enables learning a subtree for a given substring, so that the resultant rules are more specific and stricter to apply. Second, rule candidates extracted from a training corpus are enriched with contextual and lexical information using the decision tree method and verified through cross-validation. Last, we underspecify non-deterministic rules by merging substructures with ambiguity in those rules. The learned grammar is similar to a phrase structure grammar with contextual and lexical information, but it allows building structures of depth one or more. Thanks to automatic learning, the partial parsing rules can be consistent and domain-independent. Partial parsing with this grammar processes an input sentence deterministically using longest-match heuristics and applies rules to the sentence recursively. The experiments showed that the partial parser using automatically extracted rules is not only accurate and efficient but also achieves reasonable coverage for Korean.
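
    A minimal sketch with a toy grammar (the real rules carry contextual and lexical conditions learned by the decision tree): the parser scans a POS-tag sequence, always applies the longest matching rule at each position, and repeats the pass until no rule fires, mirroring the deterministic longest-match strategy described above.

    rules = {                       # hypothetical substring -> subtree-label rules
        ("DT", "JJ", "NN"): "NP",
        ("DT", "NN"): "NP",
        ("IN", "NP"): "PP",
    }

    def partial_parse(tags):
        changed = True
        while changed:
            changed = False
            out, i = [], 0
            while i < len(tags):
                # Try the longest rule first at this position.
                for n in sorted({len(k) for k in rules}, reverse=True):
                    if tuple(tags[i:i + n]) in rules:
                        out.append(rules[tuple(tags[i:i + n])])
                        i += n
                        changed = True
                        break
                else:
                    out.append(tags[i])
                    i += 1
            tags = out
        return tags

    print(partial_parse(["IN", "DT", "JJ", "NN", "VBZ"]))    # -> ['PP', 'VBZ']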

  • Fast Motion Estimation Techniques with Adaptive Variable Search Range

    Yun-Hee CHOI  Tae-Sun CHOI  

     
    PAPER

      Vol:
    E82-A No:6
      Page(s):
    905-910

    In this paper, we present two fast motion estimation techniques with adaptive variable search range, exploiting the spatial and temporal correlation of moving pictures, respectively. The first technique uses the frame difference between two adjacent frames as the criterion for deciding the search window size. The second uses the deviation between the past and the predicted current-frame motion vectors as the criterion. Simulation results show that these methods reduce the number of checking points while keeping almost the same image quality as the full search method.
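
    A minimal sketch of the first technique, with illustrative thresholds that are not taken from the paper: the mean absolute difference between adjacent frames selects the search window size before a standard block-matching search is run.

    import numpy as np

    def search_range(prev_frame: np.ndarray, cur_frame: np.ndarray) -> int:
        mad = np.mean(np.abs(cur_frame.astype(float) - prev_frame.astype(float)))
        if mad < 2.0:        # nearly static scene: small window
            return 4
        elif mad < 8.0:      # moderate motion
            return 8
        return 16            # fast motion: full window

    prev = np.zeros((64, 64), dtype=np.uint8)
    cur = prev.copy()
    cur[10:20, 10:20] = 200                  # a small bright moving block
    print(search_range(prev, cur))           # -> search window half-size in pixels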

  • Solution of Eigenvalue Integral Equation with Exponentially Oscillating Covariance Function

    Vitaly KOBER  Josue ALVAREZ-BORREGO  Tae Sun CHOI  

     
    LETTER-Digital Signal Processing

      Vol:
    E86-A No:10
      Page(s):
    2690-2692

    The Karhunen-Loeve (KL) transform is optimal for many signal detection, communication and filtering applications. We propose an explicit solution of the KL integral equation for the practical case in which the covariance function of a stationary process is exponentially oscillating.
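
    For reference, the eigenvalue integral equation in question, written in LaTeX with an exponentially oscillating covariance of the standard exponential-cosine form; the letter's exact parameters are not reproduced here.

    \int_{0}^{T} K(t-s)\,\varphi(s)\,ds = \lambda\,\varphi(t), \qquad 0 \le t \le T,
    \qquad
    K(\tau) = \sigma^{2} e^{-\alpha|\tau|} \cos(\beta\tau), \quad \alpha > 0 .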

  • Differentiating Honeycombed Images from Normal HRCT Lung Images

    Aamir Saeed MALIK  Tae-Sun CHOI  

     
    LETTER-Biological Engineering

      Vol:
    E92-D No:5
      Page(s):
    1218-1221

    A classification method is presented for differentiating honeycombed High Resolution Computed Tomographic (HRCT) lung images from normal HRCT images. For successful classification of honeycombed HRCT images, a complete set of methods and algorithms is described, from segmentation through feature extraction and feature selection to classification. Wavelet energy is selected as the feature for classification using K-means clustering. Test data from 20 patients are used to validate the method.
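
    A minimal sketch of the assumed pipeline (synthetic region-of-interest data, not HRCT images): the wavelet energy of each lung region serves as the feature, and two K-means clusters separate highly textured (honeycombed-like) regions from smoother (normal-like) ones.

    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def wavelet_energy(roi: np.ndarray) -> float:
        # Sum of squared detail coefficients of a one-level 2-D DWT.
        _, (cH, cV, cD) = pywt.dwt2(roi.astype(float), "haar")
        return float((cH**2).sum() + (cV**2).sum() + (cD**2).sum())

    rng = np.random.default_rng(1)
    normal = [rng.normal(0, 5, (32, 32)) for _ in range(10)]        # smoother ROIs
    honeycombed = [rng.normal(0, 40, (32, 32)) for _ in range(10)]  # highly textured ROIs

    X = np.array([[wavelet_energy(r)] for r in normal + honeycombed])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    print(labels)            # the two texture populations fall into different clusters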

  • New Motion Estimation Algorithm Based on Spatial Transform and Variable Grid Size

    Yun-Hee CHOI  Tae Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

      Vol:
    E84-D No:3
      Page(s):
    424-426

    Conventional spatial-transform-based motion estimation algorithms are not practical because of their heavy computational load. In this paper, we propose a motion estimation method with variable grid size, which is more efficient than conventional spatial-transform-based methods and gives better PSNR performance than the conventional BMA.

  • Fully On-Chip Current Controlled Open-Drain Output Driver for High-Bandwidth DRAMs

    Young-Hee KIM  Jong-Ki NAM  Young-Soo SOHN  Hong-June PARK  Ki-Bong KU  Jae-Kyung WEE  Joo-Sun CHOI  Choon-Sung PARK  

     
    LETTER-Integrated Electronics

      Vol:
    E82-C No:11
      Page(s):
    2101-2104

    A fully on-chip current-controlled open-drain output driver using a bandgap reference current generator was designed for high-bandwidth DRAMs. It removes the overhead of receiving a digital code from an external source to compensate for temperature and supply voltage variations. The current control register is updated with the correct value at the end of every auto refresh cycle. Operation at data rates up to 0.8 Gb/s was verified by SPICE simulation using a 0.22 µm triple-well CMOS technology.

  • Fast Full Search Algorithm Using Adaptive Matching Scan Based on Gradient Magnitude

    Jong Nam KIM  Tae-Sun CHOI  

     
    LETTER-Multimedia Systems

      Vol:
    E84-B No:3
      Page(s):
    694-697

    To reduce the amount of computation of the full search algorithm for fast motion estimation, we propose a new, fast matching algorithm without any degradation of the predicted images. The computational reduction without degradation comes from an adaptive matching scan that is ordered according to the image complexity of the reference block in the current frame. Experimentally, we significantly reduce the computational load compared with the conventional full search algorithm.
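
    A minimal sketch of the idea (the row-wise ordering is illustrative; the paper's scan order may differ): rows of the reference block with large gradient magnitude are matched first, so the partial SAD exceeds the best-so-far SAD earlier and the candidate is rejected without any loss in the full-search result.

    import numpy as np

    def adaptive_row_order(block: np.ndarray) -> np.ndarray:
        # Rank rows by the sum of absolute horizontal gradients (image complexity).
        grad = np.abs(np.diff(block.astype(float), axis=1)).sum(axis=1)
        return np.argsort(grad)[::-1]

    def sad_with_early_exit(ref, cand, order, best):
        sad = 0.0
        for r in order:                          # most complex rows first
            sad += np.abs(ref[r].astype(float) - cand[r].astype(float)).sum()
            if sad >= best:                      # cannot beat the current best match
                return np.inf
        return sad

    ref = np.random.randint(0, 256, (16, 16))
    cand = np.random.randint(0, 256, (16, 16))
    print(sad_with_early_exit(ref, cand, adaptive_row_order(ref), best=5000.0))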

  • Depth from Defocus Using Wavelet Transform

    Muhammad ASIF  Tae-Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

      Vol:
    E87-D No:1
      Page(s):
    250-253

    We propose a new method for Depth from Defocus (DFD) using the wavelet transform. Most of the existing DFD methods use inverse filtering in a transform domain to determine the measure of defocus. These methods suffer from inaccuracies in finding the frequency domain representation due to windowing and border effects. The proposed method uses wavelets, which allow both local analysis and windowing with variable-sized regions for images with varying textural properties. Experimental results show that the proposed method gives more accurate depth maps than the previous methods.
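
    A minimal sketch under an assumed formulation (this is only a relative defocus cue, not the full depth recovery): the ratio of local wavelet detail energies between two images taken at different focus settings indicates which image is locally sharper, the quantity from which a depth map would then be computed.

    import numpy as np
    import pywt

    def local_detail_energy(img: np.ndarray) -> np.ndarray:
        _, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db2")
        return cH**2 + cV**2 + cD**2             # per-location detail energy map

    near_focus = np.random.rand(64, 64)
    far_focus = np.random.rand(64, 64)

    defocus_ratio = local_detail_energy(near_focus) / (local_detail_energy(far_focus) + 1e-9)
    # defocus_ratio > 1 where the first image is locally sharper, < 1 otherwise.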

  • Current-Reused QVCO Based on Source-Connection Coupling

    Sung-Sun CHOI  Han-Yeol YU  Yong-Hoon KIM  

     
    BRIEF PAPER-Microwaves, Millimeter-Waves

      Vol:
    E94-C No:8
      Page(s):
    1324-1327

    This paper presents a current-reused quadrature voltage-controlled oscillator (QVCO) that adopts a source-connection coupling structure. The QVCO simultaneously achieves low phase noise and low power consumption by combining current-reused VCOs with coupling transistors in a new way. The measured QVCO achieves a good FoM of -188.2 dBc at a frequency of 2.2 GHz with 3.96 mW power consumption.

  • List Based Zerotree Wavelet Image Coding with Two Symbols

    Tanzeem MUZAFFAR  Tae-Sun CHOI  

     
    LETTER-Image Processing, Image Pattern Recognition

      Vol:
    E87-D No:1
      Page(s):
    254-257

    This paper presents a novel wavelet compression technique to increase the compression of images. Based on the zerotree entropy coding method, the technique initially uses only two symbols (significant and zerotree) to compress the image data at each level. Additionally, a sign bit is used for newly significant coefficients to indicate whether they are positive or negative. In contrast to the isolated-zero symbols used in conventional zerotree algorithms, the proposed algorithm converts isolated zeros to significant coefficients, saves their locations, and then treats them like other significant coefficients. This reduces the number of symbols and hence the number of bits needed to represent them. Finally, the algorithm encodes the isolated-zero coordinates, which are used to restore the original values during reconstruction. A noticeably high compression ratio is achieved for most images without changing the image quality.
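
    A minimal sketch of a single significance pass only (the zerotree structure and the isolated-zero relabelling described above are omitted): each coefficient emits one of the two symbols, plus a sign bit when it first becomes significant against the current threshold.

    import numpy as np

    def significance_pass(coeffs: np.ndarray, threshold: float):
        symbols = []
        for c in coeffs.ravel():
            if abs(c) >= threshold:
                symbols.append(("SIG", 0 if c >= 0 else 1))   # significant + sign bit
            else:
                symbols.append(("ZTR", None))                 # zerotree symbol
        return symbols

    coeffs = np.array([[35.0, -4.0], [6.0, -40.0]])
    print(significance_pass(coeffs, threshold=32.0))
    # -> [('SIG', 0), ('ZTR', None), ('ZTR', None), ('SIG', 1)]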

  • Highly Efficient Comparator Design Automation for TIQ Flash A/D Converter

    Insoo KIM  Jincheol YOO  JongSoo KIM  Kyusun CHOI  

     
    PAPER-Physical Level Design

      Vol:
    E91-A No:12
      Page(s):
    3415-3422

    The Threshold Inverter Quantization (TIQ) technique has been gaining importance in high-speed flash A/D converters because of its fast data conversion speed. It eliminates the need for resistor ladders to generate reference voltages, which require substantial power consumption. The key to TIQ comparator design is to generate 2^n - 1 differently sized TIQ comparators for an n-bit A/D converter. This paper presents a highly efficient TIQ comparator design methodology based on an analytical model as well as a SPICE-simulation experimental model. Any set of TIQ comparators can be found efficiently using the proposed method. A 6-bit TIQ A/D converter was designed in a 0.18 µm standard CMOS technology using the proposed method and compared with previously measured results in order to verify the methodology.
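
    A minimal sketch using the textbook long-channel inverter switching-threshold expression (the paper's analytical model calibrated against SPICE is more detailed, and the supply and threshold voltages below are assumed values): for each of the 2^n - 1 evenly spaced comparator thresholds, solve for the NMOS/PMOS strength ratio that a designer would then map to transistor sizes.

    import numpy as np

    VDD, VTN, VTP = 1.8, 0.45, 0.45       # assumed 0.18 um-class values
    n_bits = 6
    levels = np.arange(1, 2**n_bits) * VDD / 2**n_bits    # 63 target thresholds

    def strength_ratio(vm: float) -> float:
        # From VM = (VDD - |VTP| + VTN*sqrt(r)) / (1 + sqrt(r)) with r = kn/kp,
        # solve for r; the width ratio follows from r and the mobility ratio.
        s = (VDD - VTP - vm) / (vm - VTN)
        return s**2

    ratios = [strength_ratio(v) for v in levels if VTN < v < VDD - VTP]
    print(len(levels), "comparators;", len(ratios), "realizable with this simple model")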
